5 research outputs found

    Automatic Search for Photoacoustic Marker Using Automated Transrectal Ultrasound

    Real-time transrectal ultrasound (TRUS) image guidance during robot-assisted laparoscopic radical prostatectomy has the potential to enhance surgical outcomes. Whether conventional or photoacoustic TRUS is used, the robotic system and the TRUS must be registered to each other. Accurate registration can be performed using photoacoustic (PA) markers; however, this requires a manual search by an assistant [19]. This paper introduces the first automatic search for PA markers using a transrectal ultrasound robot, which effectively reduces the challenges associated with da Vinci-TRUS registration. The paper investigates the performance of three search algorithms in simulation and experiment: Weighted Average (WA), Golden Section Search (GSS), and Ternary Search (TS). For validation, a surgical prostate scenario was mimicked and various ex vivo tissues were tested. The WA algorithm achieved a 0.53 degree average error after 9 data acquisitions, while the TS and GSS algorithms achieved 0.29 degree and 0.48 degree average errors, respectively, after 28 data acquisitions. Comment: 13 pages, 9 figures
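
    As an illustration of the class of one-dimensional search methods named in this abstract, the sketch below runs golden-section and ternary search over a simulated transducer rotation angle, maximizing a photoacoustic-signal readout. The signal model, angle range, and acquisition budget are hypothetical stand-ins, not the paper's setup.

```python
# Minimal sketch: golden-section and ternary search over a TRUS rotation angle.
# `acquire_pa_intensity` is a hypothetical stand-in for one data acquisition.
import math

def acquire_pa_intensity(angle_deg: float) -> float:
    """Hypothetical PA signal model: peaks when the imaging plane hits the marker."""
    marker_angle = 12.3  # unknown ground truth in a real system
    return math.exp(-0.5 * ((angle_deg - marker_angle) / 2.0) ** 2)

def golden_section_search(lo: float, hi: float, n_acq: int) -> float:
    """Maximize the PA intensity over [lo, hi] with a fixed acquisition budget."""
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = acquire_pa_intensity(c), acquire_pa_intensity(d)
    acquisitions = 2
    while acquisitions < n_acq:
        if fc < fd:             # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = acquire_pa_intensity(d)
        else:                   # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = acquire_pa_intensity(c)
        acquisitions += 1
    return (a + b) / 2

def ternary_search(lo: float, hi: float, n_acq: int) -> float:
    """Maximize the PA intensity over [lo, hi], two acquisitions per iteration."""
    a, b, acquisitions = lo, hi, 0
    while acquisitions + 2 <= n_acq:
        m1, m2 = a + (b - a) / 3, b - (b - a) / 3
        if acquire_pa_intensity(m1) < acquire_pa_intensity(m2):
            a = m1              # maximum lies in [m1, b]
        else:
            b = m2              # maximum lies in [a, m2]
        acquisitions += 2
    return (a + b) / 2

print(golden_section_search(0.0, 40.0, 28))  # both converge near the 12.3 deg peak
print(ternary_search(0.0, 40.0, 28))
```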

    Arc-to-line frame registration method for ultrasound and photoacoustic image-guided intraoperative robot-assisted laparoscopic prostatectomy

    Purpose: To achieve effective robot-assisted laparoscopic prostatectomy, integration of a transrectal ultrasound (TRUS) imaging system, the most widely used imaging modality in prostate imaging, is essential. However, manual manipulation of the ultrasound transducer during the procedure significantly interferes with the surgery. Therefore, we propose an image co-registration algorithm based on a photoacoustic marker (PM) method, in which ultrasound/photoacoustic (US/PA) images are registered to the endoscopic camera images so that the TRUS transducer can ultimately track the surgical instrument automatically. Methods: An optimization-based algorithm is proposed to co-register the images from the two imaging modalities. The algorithm incorporates the principles of light propagation and the uncertainty in PM detection to improve its stability and accuracy, and it is validated using the previously developed US/PA image-guided system with a da Vinci surgical robot. Results: The target registration error (TRE) is measured to evaluate the proposed algorithm. In both simulation and experimental demonstration, the proposed algorithm achieved sub-centimeter accuracy, which is acceptable for clinical practice. The result is also comparable with our previous approach, and the proposed method can be implemented with a standard white-light stereo camera and does not require highly accurate localization of the PM. Conclusion: The proposed frame registration algorithm enables a simple yet efficient integration of a commercial US/PA imaging system into the laparoscopic surgical setting by leveraging the characteristic properties of acoustic wave propagation and laser excitation, contributing to automated US/PA image-guided surgical intervention applications. Comment: 12 pages, 9 figures
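
    The abstract describes an optimization-based co-registration between the US/PA (TRUS) frame and the camera frame. The sketch below shows one generic way such a problem can be posed, as a point-to-line rigid registration solved by nonlinear least squares; it uses synthetic data and is not the paper's arc-to-line formulation.

```python
# Sketch: estimate a rigid transform mapping PA-marker points detected in the
# TRUS frame onto laser-beam lines known in the camera frame (synthetic data).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

# Laser lines in the camera frame: origin o_i and unit direction d_i.
line_origins = rng.uniform(-10, 10, size=(6, 3))
line_dirs = rng.normal(size=(6, 3))
line_dirs /= np.linalg.norm(line_dirs, axis=1, keepdims=True)

# Ground-truth camera<-TRUS transform, used only to simulate PM detections.
R_true = Rotation.from_euler("xyz", [5, -3, 10], degrees=True)
t_true = np.array([20.0, -5.0, 12.0])

# Simulated PM points: pick a point on each line, map it into the TRUS frame.
pts_cam = line_origins + rng.uniform(5, 15, size=(6, 1)) * line_dirs
pts_trus = R_true.inv().apply(pts_cam - t_true)

def residuals(x):
    """Point-to-line distances after applying the candidate transform."""
    R, t = Rotation.from_rotvec(x[:3]), x[3:]
    p = R.apply(pts_trus) + t                      # PM points in camera frame
    v = p - line_origins
    # Component of v perpendicular to each line direction.
    perp = v - np.sum(v * line_dirs, axis=1, keepdims=True) * line_dirs
    return perp.ravel()

sol = least_squares(residuals, x0=np.zeros(6))
print("residual RMS (mm):", np.sqrt(np.mean(sol.fun ** 2)))
```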

    TISS-net: Brain tumor image synthesis and segmentation using cascaded dual-task networks and error-prediction consistency

    Accurate segmentation of brain tumors from medical images is important for diagnosis and treatment planning, and it often requires multi-modal or contrast-enhanced images. However, in practice some modalities of a patient may be absent. Synthesizing the missing modality has the potential to fill this gap and achieve high segmentation performance. Existing methods often treat the synthesis and segmentation tasks separately, or consider them jointly but without effective regularization of the complex joint model, leading to limited performance. We propose a novel brain Tumor Image Synthesis and Segmentation network (TISS-Net) that obtains the synthesized target modality and the segmentation of brain tumors end-to-end with high performance. First, we propose a dual-task-regularized generator that simultaneously produces a synthesized target modality and a coarse segmentation, leveraging a tumor-aware synthesis loss with perceptibility regularization to minimize the high-level semantic domain gap between synthesized and real target modalities. Based on the synthesized image and the coarse segmentation, we further propose a dual-task segmentor that simultaneously predicts a refined segmentation and the error in the coarse segmentation, with a consistency between these two predictions introduced for regularization. TISS-Net was validated on two applications: synthesizing FLAIR images for whole glioma segmentation, and synthesizing contrast-enhanced T1 images for vestibular schwannoma segmentation. Experimental results showed that TISS-Net substantially improved segmentation accuracy compared with direct segmentation from the available modalities, and that it outperformed state-of-the-art image-synthesis-based segmentation methods.
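
    To make the dual-task idea in this abstract concrete, the sketch below combines a synthesis term, coarse and refined segmentation terms, and a consistency term between the refined segmentation and the predicted error map. The loss weights, the plain L1 stand-in for the tumor-aware/perceptibility term, and the exact form of the consistency penalty are assumptions for illustration, not the TISS-Net objective.

```python
# Sketch of a composite loss combining synthesis, dual segmentation outputs,
# and an error-prediction consistency term (placeholder weights and terms).
import torch
import torch.nn.functional as F

def tiss_style_loss(syn, target_img, coarse_logits, refined_logits, err_logits,
                    gt_seg, w_syn=1.0, w_coarse=1.0, w_refine=1.0, w_cons=0.5):
    # Image synthesis term (plain L1 stands in for the tumor-aware synthesis
    # loss with perceptibility regularization described in the abstract).
    loss_syn = F.l1_loss(syn, target_img)
    # Coarse and refined segmentation terms (binary cross-entropy on logits).
    loss_coarse = F.binary_cross_entropy_with_logits(coarse_logits, gt_seg)
    loss_refine = F.binary_cross_entropy_with_logits(refined_logits, gt_seg)
    # Consistency regularization: the predicted error map should agree with
    # the disagreement between the coarse and refined predictions.
    disagreement = (torch.sigmoid(coarse_logits) - torch.sigmoid(refined_logits)).abs()
    loss_cons = F.l1_loss(torch.sigmoid(err_logits), disagreement.detach())
    return (w_syn * loss_syn + w_coarse * loss_coarse
            + w_refine * loss_refine + w_cons * loss_cons)

# Dummy tensors standing in for network outputs on a small batch.
b, c, h, w = 2, 1, 64, 64
syn, coarse, refined, err = (torch.randn(b, c, h, w, requires_grad=True)
                             for _ in range(4))
target_img = torch.randn(b, c, h, w)
gt_seg = torch.randint(0, 2, (b, c, h, w)).float()
loss = tiss_style_loss(syn, target_img, coarse, refined, err, gt_seg)
loss.backward()
print(float(loss))
```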

    A Novel Registration Algorithm for Intraoperative Surgical Guidance System with da Vinci Robot Platform

    In robot-assisted prostatectomy (RAP), the transrectal ultrasound (TRUS) transducer plays an indispensable role in intraoperative navigation and monitoring because of its capability to differentiate underlying tissue structures and molecular characteristics (e.g., prostate cancer, the peripheral nerve network). Surgical outcomes are highly sensitive to the co-registration accuracy between the endoscopic camera and TRUS frames. Building on the photoacoustic effect, this study focuses on the photoacoustic marker (PM) technique, in which laser sources can be localized by the TRUS. Despite its high accuracy and reduced complications, this technique suffers from laser spots that are invisible in the camera view, unstable visual detection of laser points through the endoscopic camera, and a time-consuming procedure in which users manually rotate the transducer to localize the PM in the TRUS frame. These limitations severely impede the clinical application of the technique. To address them, an optimization-based algorithm is developed to co-register the two imaging systems (i.e., the endoscopic camera and the TRUS) by leveraging the physical properties of the laser beam and photoacoustic imaging. We also investigate the performance of the proposed methods with computer simulations and phantom experiments in a robotic surgical environment. The target registration error (TRE) is measured to evaluate the proposed algorithm. In both the simulation and the phantom demonstration, the proposed algorithm achieved sub-centimeter accuracy that satisfies the requirements of clinical practice (1.15 ± 0.29 mm in the experimental evaluation).
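
    Since both of the registration abstracts above report a target registration error, the short sketch below shows how a TRE figure of this kind is typically computed: held-out target points are mapped through the estimated transform and compared with their known positions in the other frame. The transforms, error magnitudes, and target points here are synthetic placeholders, not the study's data.

```python
# Sketch: computing target registration error (TRE) from an estimated rigid
# transform and held-out target points (all values synthetic).
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)

# Ground-truth camera<-TRUS transform and an estimate with a small error.
R_true = Rotation.from_euler("xyz", [4, -2, 7], degrees=True)
t_true = np.array([15.0, -8.0, 30.0])
R_est = R_true * Rotation.from_euler("z", 0.4, degrees=True)  # small rotational error
t_est = t_true + np.array([0.5, -0.3, 0.8])                   # small translational error (mm)

# Held-out target points in the TRUS frame (not used for the registration itself).
targets_trus = rng.uniform(-20, 20, size=(10, 3))

mapped_true = R_true.apply(targets_trus) + t_true
mapped_est = R_est.apply(targets_trus) + t_est
tre = np.linalg.norm(mapped_est - mapped_true, axis=1)
print(f"TRE: {tre.mean():.2f} ± {tre.std():.2f} mm")
```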